Action recognition has recently attracted increasing attention for its comprehensive and practical applications in intelligent surveillance and human-computer interaction. However, few-shot action recognition remains under-explored and challenging because of data scarcity. In this paper, we propose a novel hierarchical compositional representations (HCR) learning approach for few-shot action recognition. Specifically, we divide a complex action into several sub-actions by carefully designed hierarchical clustering, and further decompose the sub-actions into more fine-grained spatially attentional sub-actions (SAS-actions). Although there exist large differences between base classes and novel classes, they can share similar patterns in sub-actions or SAS-actions. Furthermore, we adopt the Earth Mover's Distance from the transportation problem to measure the similarity between video samples in terms of sub-action representations. It computes the optimal matching flows between sub-actions as the distance metric, which favors comparing fine-grained patterns. Extensive experiments show that our method achieves state-of-the-art results on the HMDB51, UCF101, and Kinetics datasets.
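The optimal-matching idea can be illustrated with a small transportation-LP sketch. This is not the authors' implementation: the uniform sub-action weights and the cosine cost are assumptions, and `emd_distance` is a hypothetical helper name.

```python
import numpy as np
from scipy.optimize import linprog

def emd_distance(feats_a, feats_b):
    """Earth Mover's Distance between two sets of sub-action features.

    feats_a: (m, d) array, feats_b: (n, d) array. Uniform weights are
    assumed on both sides; the ground cost is the cosine distance
    between sub-action embeddings.
    """
    m, n = len(feats_a), len(feats_b)
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    cost = 1.0 - a @ b.T                       # (m, n) cosine distances

    # Transportation LP: minimize sum_ij f_ij * c_ij subject to
    # row sums = 1/m, column sums = 1/n, f_ij >= 0.
    A_eq, b_eq = [], []
    for i in range(m):                         # row marginals
        row = np.zeros(m * n)
        row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row)
        b_eq.append(1.0 / m)
    for j in range(n):                         # column marginals
        col = np.zeros(m * n)
        col[j::n] = 1.0
        A_eq.append(col)
        b_eq.append(1.0 / n)
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.fun
```

For identical sub-action sets the optimal flow lies on the diagonal and the distance is zero; mismatched fine-grained patterns raise the cost.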
Rotation-invariant face detection, i.e., detecting faces with arbitrary rotation-in-plane (RIP) angles, is widely required in unconstrained applications but remains a challenging task because of the large variations of face appearances. Most existing methods compromise on either speed or accuracy to handle the large RIP variations. To address this problem more efficiently, we propose Progressive Calibration Networks (PCN) to perform rotation-invariant face detection in a coarse-to-fine manner. PCN consists of three stages, each of which not only distinguishes faces from non-faces, but also calibrates the RIP orientation of each face candidate to upright progressively. By dividing the calibration process into several progressive steps, and only predicting coarse orientations in the early stages, PCN achieves precise and fast calibration. By performing binary classification of face versus non-face with gradually decreasing RIP ranges, PCN can accurately detect faces with full $360^{\circ}$ RIP angles. Such a design leads to a real-time rotation-invariant face detector. Experiments on the multi-oriented FDDB and a challenging subset of WIDER FACE containing rotated faces in the wild show that our PCN achieves quite promising performance.
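The range-narrowing bookkeeping behind the three calibration stages can be sketched as follows. The ground-truth angle stands in for the per-stage CNN predictions here, so this shows only the coarse-to-fine logic, not a trained detector; `progressively_calibrate` is a hypothetical name.

```python
def progressively_calibrate(rip_angle):
    """Toy walk-through of PCN-style coarse-to-fine calibration.

    `rip_angle` is the candidate's rotation-in-plane angle in degrees,
    in [-180, 180). Returns the residual angle after calibration and
    the list of rotations applied along the way.
    """
    rotations = []
    # Stage 1: coarse binary orientation; upside-down candidates are
    # flipped by 180 degrees, narrowing the RIP range to [-90, 90].
    if abs(rip_angle) > 90:
        rip_angle = ((rip_angle + 180 + 180) % 360) - 180
        rotations.append(180)
    # Stage 2: coarse choice among {-90, 0, 90}, narrowing to [-45, 45].
    if rip_angle > 45:
        rip_angle -= 90
        rotations.append(-90)
    elif rip_angle < -45:
        rip_angle += 90
        rotations.append(90)
    # Stage 3: fine-grained regression of the residual angle to upright.
    rotations.append(-rip_angle)
    rip_angle = 0
    return rip_angle, rotations
```

Because the early stages only pick among a few coarse rotations, each stage's classification problem stays easy, which is what makes the cascade fast.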
Visual (re)localization addresses the problem of estimating the 6-DoF (degrees of freedom) camera pose of a query image captured in a known scene, which is a key building block of many computer vision and robotics applications. Recent advances in structure-based localization solve this problem by memorizing the mapping from image pixels to scene coordinates with neural networks to build 2D-3D correspondences for camera pose optimization. However, such memorization requires training a large number of images in each scene, which is heavy and inefficient. On the contrary, few images are usually sufficient to cover the main regions of a scene for a human operator to perform visual localization. In this paper, we propose a scene region classification approach to achieve fast and effective scene memorization with few-shot images. Our insight is to leverage a) pre-learned feature extractors, b) scene region classifiers, and c) a meta-learning strategy to accelerate training while mitigating overfitting. We evaluate our method on both indoor and outdoor benchmarks. The experiments validate the effectiveness of our method in the few-shot setting, and the training time is significantly reduced to only a few minutes. Code available at: \url{https://github.com/siyandong/src}
Rebuilding spectral functions from Euclidean Green's functions is an important inverse problem in physics. Prior knowledge of the specific physical system usually provides essential regularization schemes for solving this ill-posed problem. Aiming at this, we propose an automatic differentiation framework as a generic tool for reconstruction from observable data. We represent the spectra by neural networks and set the chi-square as the loss function to optimize the parameters with backward automatic differentiation. In the training process, no explicit physical priors are embedded into the neural networks except the positive-definite form. The reconstruction accuracy is assessed through the Kullback-Leibler (KL) divergence and the mean squared error (MSE) at multiple noise levels. It should be noted that the automatic differentiation framework and the freedom of introducing regularization are inherent advantages of the present approach and may lead to improvements in solving inverse problems in the future.
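A minimal numpy sketch of the chi-square optimization: the exponential mock kernel, the discretization, and the plain gradient-descent loop are illustrative assumptions, the spectrum is a pointwise softplus parameterization rather than the paper's neural network, and a hand-written backward pass plays the role automatic differentiation plays in the paper.

```python
import numpy as np

# Mock setup: G(tau) = integral of K(tau, w) * rho(w) dw, discretized.
taus = np.linspace(0.05, 1.0, 20)
omegas = np.linspace(0.1, 5.0, 50)
dw = omegas[1] - omegas[0]
K = np.exp(-np.outer(taus, omegas)) * dw       # discretized kernel (20, 50)

rho_true = np.exp(-(omegas - 2.0) ** 2)        # mock positive spectrum
G_obs = K @ rho_true                           # noiseless mock propagator

theta = np.zeros_like(omegas)                  # unconstrained parameters
chi2_init = np.sum((K @ np.log1p(np.exp(theta)) - G_obs) ** 2)
for _ in range(5000):
    rho = np.log1p(np.exp(theta))              # softplus keeps rho positive
    resid = K @ rho - G_obs
    # Hand-written backward pass (the role AD plays in the paper):
    # d(chi2)/d(theta) = 2 * K^T resid * sigmoid(theta)
    grad = 2.0 * (K.T @ resid) / (1.0 + np.exp(-theta))
    theta -= 0.1 * grad                        # plain gradient descent

chi2 = np.sum((K @ np.log1p(np.exp(theta)) - G_obs) ** 2)
print(chi2_init, chi2)                         # chi-square before vs. after
```

The softplus map is the only built-in prior, mirroring the abstract's point that positivity is the sole explicit physical constraint.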
Reconstructing spectral functions from Euclidean Green's functions is an important inverse problem in many-body physics. However, the inversion proves ill-posed in realistic systems with noisy Green's functions. In this Letter, we propose an automatic differentiation (AD) framework as a generic tool for spectral reconstruction from propagator observables. Exploiting neural networks as a non-local smoothness regularizer for the spectral function, we represent spectral functions by neural networks and use the propagator's reconstruction error to optimize the unconstrained network parameters. In the training process, no explicit physical priors other than the positive-definite form of the spectral function are embedded into the neural networks. The reconstruction performance is assessed through the relative entropy and the mean squared error for two different network representations. Compared to the maximum entropy method, the AD framework achieves better performance in the large-noise situation. Note that the freedom of introducing non-local regularization is an inherent advantage of the present framework and may lead to substantial improvements in solving inverse problems.
A recent study has shown a phenomenon called neural collapse in that the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to the appealing structure in imbalanced semantic segmentation. Experimental results show that our method can bring significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
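The target geometry can be made concrete with a small sketch: a simplex equiangular tight frame of K class centers has unit-norm, zero-mean vectors whose pairwise cosines all equal -1/(K-1). The quadratic penalty below is an illustrative stand-in for such a regularizer, not the paper's exact form.

```python
import numpy as np

def etf_regularizer(centers):
    """Quadratic penalty pushing class feature centers toward a simplex
    equiangular tight frame: after removing the global mean and
    normalizing, every pairwise cosine should equal -1/(K-1).

    centers: (K, d) array of per-class feature centers.
    """
    K = centers.shape[0]
    c = centers - centers.mean(axis=0)               # remove the global mean
    c = c / np.linalg.norm(c, axis=1, keepdims=True)
    gram = c @ c.T                                   # pairwise cosines
    # Target Gram matrix: 1 on the diagonal, -1/(K-1) off the diagonal.
    target = np.eye(K) * (1.0 + 1.0 / (K - 1)) - 1.0 / (K - 1)
    return np.sum((gram - target) ** 2)
```

The penalty vanishes exactly when the centers form an ETF (e.g., three vectors at 120 degrees in the plane), which is the maximally separated configuration neural collapse converges to under balanced training.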
Weakly-supervised object localization aims to indicate the category as well as the scope of an object in an image given only the image-level labels. Most of the existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet ignore the co-occurrence confounder of the object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and addressing the dilemma between classification and localization performance.
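For reference, the vanilla CAM that such methods build on projects one class's classifier weights onto the last convolutional feature maps; a minimal sketch, assuming a global-average-pooling classifier:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Vanilla CAM: weight the last-conv feature maps by the classifier
    weights of one class, keep positive evidence, and min-max normalize.

    feature_maps: (C, H, W) last-conv activations.
    fc_weights:   (num_classes, C) weights of the linear classifier that
                  follows global average pooling.
    Returns an (H, W) heatmap in [0, 1].
    """
    # Contract the channel axis: sum_c w_c * F_c(h, w).
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)
    cam = np.maximum(cam, 0.0)                 # ReLU: positive evidence only
    spread = cam.max() - cam.min()
    return (cam - cam.min()) / spread if spread > 0 else cam
```

The co-occurrence problem described above shows up directly in this map: if water features carry weight for the "fish" class, the heatmap lights up on water as well, blurring the object boundary.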
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, making predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pre-training in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
Nearest-Neighbor (NN) classification has been proven a simple and effective approach for few-shot learning. The query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models. However, NN-based methods are sensitive to the data distribution and may produce false predictions if the samples in the support set happen to lie around the distribution boundary of different classes. To solve this issue, we present P3DC-Shot, an improved nearest-neighbor based few-shot classification method empowered by prior-driven data calibration. Inspired by the distribution calibration technique which utilizes the distribution or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation which is more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class as priors and calibrate each support data based on its similarity to different base prototypes. Then, we perform NN classification using these discretely calibrated support data. Results from extensive experiments on various datasets show that our efficient non-learning based method can outperform, or at least be comparable to, SOTA methods that need additional learning steps.
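The calibration step can be sketched as follows. The blend weight, the top-k prototype selection, and the softmax weighting are illustrative assumptions rather than the paper's exact operation, and both function names are hypothetical.

```python
import numpy as np

def calibrate_support(support, base_prototypes, alpha=0.5, top_k=2):
    """Prior-driven calibration in the spirit of P3DC-Shot: each support
    vector is shifted toward the base prototypes it is most similar to,
    pulling boundary samples back toward plausible class regions.

    support: (n_support, d); base_prototypes: (n_base, d).
    """
    s = support / np.linalg.norm(support, axis=1, keepdims=True)
    p = base_prototypes / np.linalg.norm(base_prototypes, axis=1, keepdims=True)
    sims = s @ p.T                                   # (n_support, n_base)
    calibrated = []
    for i, x in enumerate(s):
        idx = np.argsort(sims[i])[-top_k:]           # most similar base classes
        w = np.exp(sims[i][idx])
        w = w / w.sum()                              # softmax over top-k
        prior = w @ p[idx]                           # similarity-weighted prior
        calibrated.append(alpha * x + (1 - alpha) * prior)
    return np.array(calibrated)

def nn_classify(query, calibrated_support, support_labels):
    """Nearest-neighbor prediction against the calibrated support set."""
    q = query / np.linalg.norm(query)
    sims = calibrated_support @ q / np.linalg.norm(calibrated_support, axis=1)
    return support_labels[int(np.argmax(sims))]
```

No parameters are learned here, which matches the abstract's claim of an efficient non-learning based method: calibration is a fixed function of the frozen base prototypes.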